
    Optimisation for Optical Data Centre Switching and Networking with Artificial Intelligence

    Cloud and cluster computing platforms have become standard across almost every domain of business, and their scale quickly approaches O(10^6) servers in a single warehouse. However, the tier-based opto-electronically packet switched network infrastructure that is standard across these systems gives rise to several scalability bottlenecks, including resource fragmentation and high energy requirements. Experimental results show that optical circuit switched networks are a promising alternative that could avoid these bottlenecks; however, optimisation challenges are encountered at realistic commercial scales. Where exhaustive optimisation techniques are intractable for problems at the scale of Cloud computer networks, and expert-designed heuristics are performance-limited and typically biased in their design, artificial intelligence can discover more scalable and better-performing optimisation strategies. This thesis demonstrates these benefits through experimental and theoretical work spanning the component-, system- and commercial-level optimisation problems which stand in the way of practical Cloud-scale computer network systems. Firstly, optical components are optimised to gate in ≈500 ps and are demonstrated in a proof-of-concept switching architecture for optical data centres with better wavelength and component scalability than previous demonstrations. Secondly, network-aware resource allocation schemes for optically composable data centres are learnt end-to-end with deep reinforcement learning and graph neural networks, requiring 3× fewer networking resources to achieve the same resource efficiency as conventional methods. Finally, a deep reinforcement learning based method for optimising PID-control parameters is presented which generates tailored parameters for unseen devices in O(10^-3) s. This method is demonstrated on a market-leading optical switching product based on piezoelectric actuation, where switching speed is improved by >20% with no compromise to optical loss, and the manufacturing yield of actuators is improved. This method was licensed to and integrated within the manufacturing pipeline of this company. As such, crucial public and private infrastructure utilising these products will benefit from this work.
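    To make the PID-tuning objective concrete, the following is a minimal illustrative sketch (not the thesis implementation): it scores a candidate set of PID gains by simulating a unit step on an assumed second-order actuator model and penalising overshoot and settling time, the kind of reward signal a deep reinforcement learning policy could be trained against. The actuator parameters, reward weights and gains below are placeholders, not values from the work.

```python
# Illustrative sketch only: score a candidate (kp, ki, kd) on an assumed
# second-order actuator model. A learned policy would map device features
# to these gains directly; none of the constants here come from the thesis.
import numpy as np

def step_response(kp, ki, kd, wn=2.0 * np.pi * 1e3, zeta=0.05,
                  dt=1e-6, t_end=0.02):
    """Unit-step response under PID control of the assumed plant
    x'' + 2*zeta*wn*x' + wn^2 * x = wn^2 * u."""
    n = int(t_end / dt)
    x, v, integ, prev_err = 0.0, 0.0, 0.0, 1.0
    out = np.empty(n)
    for i in range(n):
        err = 1.0 - x                                 # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv        # PID control signal
        prev_err = err
        a = wn**2 * (u - x) - 2.0 * zeta * wn * v     # plant acceleration
        v += a * dt
        x += v * dt
        out[i] = x
    return out

def score(resp, tol=0.02):
    """Reward-style score: penalise settling time and overshoot."""
    overshoot = max(resp.max() - 1.0, 0.0)
    not_settled = np.where(np.abs(resp - 1.0) > tol)[0]
    settle_idx = not_settled[-1] + 1 if not_settled.size else 0
    return -(settle_idx + 1e4 * overshoot)

resp = step_response(kp=0.8, ki=400.0, kd=1e-4)       # placeholder gains
print("score:", score(resp))
```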

    Network Aware Compute and Memory Allocation in Optically Composable Data Centres with Deep Reinforcement Learning and Graph Neural Networks

    Resource-disaggregated data centre architectures promise a means of pooling resources remotely within data centres, allowing for both more flexibility and greater resource efficiency underlying the increasingly important infrastructure-as-a-service business. This can be accomplished by using an optically circuit switched backbone in the data centre network (DCN), providing the bandwidth and latency guarantees required to ensure reliable performance when applications are run across non-local resource pools. However, resource allocation in this scenario requires both server-level and network-level resources to be co-allocated to requests. The online nature and underlying combinatorial complexity of this problem, alongside the typical scale of DCN topologies, makes exact solutions impossible and heuristic-based solutions sub-optimal or non-intuitive to design. We demonstrate that deep reinforcement learning, where the policy is modelled by a graph neural network, can be used to learn effective network-aware and topologically-scalable allocation policies end-to-end. Compared to state-of-the-art heuristics for network-aware resource allocation, the method achieves up to 20% higher acceptance ratio; can achieve the same acceptance ratio as the best performing heuristic with 3× fewer networking resources available; and can maintain all-around performance when directly applied (with no further training) to DCN topologies with 10^2× more servers than those seen during training.
    Comment: 10 pages + 1 appendix page, 8 figures
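    As an illustration of how a graph neural network can serve as an allocation policy, the sketch below (an assumption-laden toy, not the paper's architecture) runs one round of mean-aggregation message passing over a small server graph and turns the node embeddings into a softmax over candidate placements. The topology, node features and weights are invented for the example; a real agent would also encode network-link state and be trained with a reinforcement learning objective.

```python
# Illustrative toy: one message-passing layer producing per-server placement
# probabilities that a DRL agent could sample actions from. Not the paper's
# model; all features, weights and the topology are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def gnn_policy_scores(node_feats, adj, w_self, w_neigh):
    """Mean-aggregation message passing + linear readout + softmax."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neigh = (adj @ node_feats) / deg               # mean of neighbour features
    hidden = np.tanh(node_feats @ w_self + neigh @ w_neigh)
    logits = hidden.sum(axis=1)                    # scalar score per server
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy topology: 4 servers in a line; features = (free compute, free memory).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.array([[0.9, 0.2],
                  [0.5, 0.5],
                  [0.1, 0.8],
                  [0.7, 0.7]])
w_self = rng.normal(size=(2, 4))
w_neigh = rng.normal(size=(2, 4))

probs = gnn_policy_scores(feats, adj, w_self, w_neigh)
print("placement probabilities:", probs.round(3))
```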

    Optimal Control of SOAs with Artificial Intelligence for Sub-Nanosecond Optical Switching

    Novel approaches to switching ultra-fast semiconductor optical amplifiers using artificial intelligence algorithms (particle swarm optimisation, ant colony optimisation, and a genetic algorithm) are developed and applied in both simulation and experiment. Effective off-on switching (settling) times of 542 ps are demonstrated with just 4.8% overshoot, achieving an order of magnitude improvement over previous attempts described in the literature and over standard dampening techniques from control theory.
    Comment: This manuscript was accepted for publication in the IEEE/OSA Journal of Lightwave Technology on 21st June 2020. Open access code: https://github.com/cwfparsonson/soa_driving Open access data: https://doi.org/10.5522/04/12356696.v
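    For intuition, here is a minimal sketch of how particle swarm optimisation can shape a drive signal: each particle encodes the samples of a drive waveform, and is scored by the overshoot and settling time of an assumed second-order switch response. The plant model, swarm hyperparameters and fitness weights are illustrative only and are not taken from the paper or its open-source code.

```python
# Illustrative sketch only: PSO over drive-waveform samples for a toy
# under-damped switch model. Constants are placeholders, not the paper's.
import numpy as np

rng = np.random.default_rng(1)

def simulate(drive, wn=2.0 * np.pi * 3e9, zeta=0.3, dt=10e-12):
    """Assumed second-order response of the switch to the drive waveform."""
    x, v = 0.0, 0.0
    out = np.empty(drive.size)
    for i, u in enumerate(drive):
        a = wn**2 * (u - x) - 2.0 * zeta * wn * v
        v += a * dt
        x += v * dt
        out[i] = x
    return out

def fitness(drive, tol=0.05):
    """Lower is better: settling time (in samples) plus overshoot penalty."""
    resp = simulate(drive)
    overshoot = max(resp.max() - 1.0, 0.0)
    not_settled = np.where(np.abs(resp - 1.0) > tol)[0]
    settle = not_settled[-1] + 1 if not_settled.size else 0
    return settle + 100.0 * overshoot

n_particles, n_samples, iters = 20, 60, 50
pos = rng.uniform(0.0, 2.0, size=(n_particles, n_samples))   # drive levels
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 2.0)
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best fitness:", pbest_f.min())
```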

    Techniques for applying reinforcement learning to routing and wavelength assignment problems in optical fiber communication networks

    We propose a novel application of reinforcement learning (RL) with invalid action masking and a novel training methodology for routing and wavelength assignment (RWA) in fixed-grid optical networks, and demonstrate the generalizability of the learned policy to a realistic traffic matrix unseen during training. Through the introduction of invalid action masking and a new training method, the applicability of RL to RWA in fixed-grid networks is extended from considering connection requests between nodes to servicing demands of a given bit rate, such that lightpaths can be used to service multiple demands subject to capacity constraints. We outline the additional challenges of this RWA problem compared with the connection-request RWA problem considered in the literature, for which we found that standard RL performed poorly relative to baseline heuristics. Thus, we propose invalid action masking and a novel training method to improve the efficacy of the RL agent. With invalid action masking, domain knowledge is embedded in the RL model to constrain the action space of the RL agent to lightpaths that can support the current request, reducing the size of the action space and thus increasing the efficacy of the agent. In the proposed training method, the RL model is trained on a simplified version of the problem and evaluated on the target RWA problem, increasing the efficacy of the agent compared with training directly on the target problem. RL with invalid action masking and this training method outperforms standard RL and three state-of-the-art heuristics, namely k shortest path first fit, first-fit k shortest path, and k shortest path most utilized, consistently across uniform and nonuniform traffic in terms of the number of accepted transmission requests for two real-world core topologies, NSFNET and COST-239. The RWA runtime of the proposed RL model is comparable to that of these heuristic approaches, demonstrating the potential for real-world applicability. Moreover, we show that the RL agent trained on uniform traffic is able to generalize well to a realistic nonuniform traffic distribution not seen during training, thus outperforming the heuristics for this traffic. Visualization of the learned RWA policy reveals an RWA strategy that differs significantly from those of the heuristic baselines in terms of the distribution of services across channels and the distribution across links.
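    The core of invalid action masking can be shown in a few lines: candidate lightpath actions whose remaining capacity cannot carry the requested bit rate have their logits forced to negative infinity before the softmax, so the agent can never select them. The sketch below is an illustrative toy rather than the paper's implementation; the logits, capacities and request values are made up, and at least one valid action is assumed to exist.

```python
# Illustrative toy of invalid action masking for RWA: each action is a
# candidate (path, wavelength) lightpath; actions without enough remaining
# capacity for the request get zero probability. Values are placeholders.
import numpy as np

def masked_policy(logits, remaining_capacity, request_bit_rate):
    """Zero probability for lightpaths that cannot carry the request."""
    valid = remaining_capacity >= request_bit_rate
    masked = np.where(valid, logits, -np.inf)          # mask invalid actions
    exp = np.exp(masked - masked[valid].max())         # stable softmax
    return exp / exp.sum()

# Toy example: 5 candidate lightpaths (k shortest paths x wavelengths).
logits = np.array([1.2, 0.3, -0.5, 2.0, 0.1])          # raw policy outputs
remaining = np.array([100.0, 40.0, 0.0, 25.0, 400.0])  # Gb/s left per lightpath
probs = masked_policy(logits, remaining, request_bit_rate=50.0)
print(probs.round(3))   # invalid lightpaths get exactly zero probability
```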